1. Mission: Peace Through Scalable Cooperation
Overview
The United Nations was founded on one core principle: peace through cooperation. Our mission has always been to connect nations, enable dialogue, and coordinate solutions that no single state could achieve alone.
Today, that same mission extends into the digital domain. Where once we convened diplomats, we now connect systems of intelligence. Where once we mediated state-to-state conflict, we now mediate between algorithms, data, and platforms.
Governance, in this context, is not bureaucracy. It is the infrastructure of peace—ensuring that artificial intelligence strengthens human collaboration rather than replacing it.
Objectives
- Align AI adoption with UN values of trust, transparency, and fairness
- Enable interoperable and safe data sharing across Secretariat entities
- Build confidence that AI systems serve humanity, not hierarchy
"In the 20th century, we safeguarded peace through diplomacy. In the 21st century, we safeguard peace through digital cooperation, algorithmic accountability, and data ethics."
1A. Governance Context - The 2025 Landscape
The New UN AI Governance Architecture
In August 2025, the UN General Assembly established two landmark mechanisms:
1. The Global Dialogue on AI Governance
- Inclusive forum for governments and stakeholders
- Addresses safety, security, trustworthiness challenges
- Launched September 2025 during UNGA High-Level Week
- Annual sessions alternating between New York and Geneva
2. The Independent International Scientific Panel on AI
- First global scientific body on AI assessment
- ~40 experts appointed for three-year terms
- Provides evidence-based policy guidance
UN Secretariat's Role
OICT serves as the operational bridge between high-level commitments and practical implementation across UN entities.
Strategic Alignment
| Initiative | Organization | Year |
|---|---|---|
| Global Digital Compact | UN (Pact for the Future) | 2024 |
| Global AI Governance Action Plan | World AI Conference | 2025 |
| Hiroshima AI Process | G7 | 2023 |
| UNESCO Ethics Recommendation | UNESCO (194 members) | 2021 |
| OECD AI Principles | OECD, adopted by G20 | 2019 |
2. The Strategic Frontier - What AI Represents
AI as a Capability Infrastructure
Nations now compete for informational sovereignty: the ability to forecast, decide, act, and communicate at unprecedented scale and speed.
Eight Strategic Capabilities of AI
| Capability | Description | National Advantage | Governance Implication |
|---|---|---|---|
| 1. Predictive Sovereignty | Forecasting crises before escalation | Early warning enables preemptive action | Model transparency, validation, bias audits |
| 2. Cognitive Superiority | Faster decision cycles | Minutes vs. days in crisis response | Human oversight, explainability standards |
| 3. Narrative Control | Shaping information flows via generative AI | Controls perception and messaging | Provenance tracking, authenticity controls |
| 4. Knowledge Aggregation | Synthesizing global data into intelligence | Transforms fragmented info to insight | Data-sharing ethics, access equity |
| 5. Algorithmic Economic Power | Optimizing production and logistics | Efficiency and cost reduction | Access fairness, antitrust considerations |
| 6. Cyber-Physical Autonomy | AI in drones, grids, transportation | Autonomous operations at scale | Safety protocols, human-in-the-loop governance |
| 7. Geospatial Intelligence | AI-enhanced satellite/drone monitoring | Unprecedented situational awareness | Cross-border governance, privacy protection |
| 8. Agentic AI | Self-coordinating autonomous systems | Future AI-to-AI diplomacy | Trust protocols, autonomous audit mechanisms |
Impact Quantification
Without Governance
- Predictive bias: resource misallocation exceeding $100M
- Unverified geospatial AI: false security alerts
- Unchecked algorithmic economic power: deepened inequality
With Governance
- Crisis response: 40-60% faster
- Decision accuracy: +25-35%
- Trust in AI: +50-70%
Strategic Imperative: The UN Secretariat ensures these capabilities serve peace, equity, and cooperation—not dominance.
3. Framework - Stackable Governance Architecture
Our Model
We govern AI through a stackable architecture: independent modules that interlock, scale, and adapt.
The Six-Layer Governance Stack
The stack comprises six modules: Compliance & Ethics, Risk Management, Infrastructure, Model Oversight, Performance, and Communication. Each module maps to external frameworks as follows.
Framework Alignment
| Module | EU AI Act | NIST AI RMF | ISO/IEC 42001 |
|---|---|---|---|
| Compliance & Ethics | Articles 5-7 | Govern | Clause 4.4, 5 |
| Risk Management | Articles 9-15 | Map, Measure, Manage | Clause 6, 8 |
| Infrastructure | Article 10 | Data Governance | Clause 7.5 |
| Model Oversight | Articles 11-12 | Measure | Clause 8.2 |
| Performance | Article 15 | Measure | Clause 9 |
| Communication | Article 13 | Culture | Clause 7.4 |
Implementation Path
- Minimum Viable: Layers 1-3 (start here)
- 6 Months: Add Layers 4-5
- 12 Months: Full stack with Layer 6
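To make the stack actionable, the modules and the phased implementation path can be tracked as data. The sketch below is a minimal illustration, assuming the layer numbering follows the order of the alignment table; it is not an existing OICT tool.

```python
# Minimal sketch: the six-layer stack and the phased rollout as data, so an
# entity can check which layers a given system still needs. Module names
# mirror the alignment table above; the layer numbering is an assumption
# based on the table order, and all identifiers are illustrative.

GOVERNANCE_STACK = [
    (1, "Compliance & Ethics"),
    (2, "Risk Management"),
    (3, "Infrastructure"),
    (4, "Model Oversight"),
    (5, "Performance"),
    (6, "Communication"),
]

PHASES = {
    "Minimum Viable": {1, 2, 3},          # start here
    "6 Months": {1, 2, 3, 4, 5},          # add Layers 4-5
    "12 Months": {1, 2, 3, 4, 5, 6},      # full stack with Layer 6
}

def missing_layers(implemented: set[int], target_phase: str) -> list[str]:
    """Return the names of layers still required to reach the target phase."""
    required = PHASES[target_phase]
    return [name for num, name in GOVERNANCE_STACK
            if num in required and num not in implemented]

# Example: a system with Layers 1-3 in place, aiming for the 6-month target.
print(missing_layers({1, 2, 3}, "6 Months"))  # ['Model Oversight', 'Performance']
```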
4. Capabilities - How We Govern
Three Governance Pillars
Pillar 1: Accountability
Clear ownership, traceability, and responsibility for every AI system.
- AI System Registry: Unique identifier, owner, documentation (a registry-entry sketch follows this list)
- Stewardship Model: Named individuals responsible
- Audit Trails: Complete logging of decisions and changes
- Incident Response: Clear escalation paths
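The sketch below illustrates what a registry entry combining these elements might look like; the field names and log format are illustrative assumptions, not a prescribed OICT schema.

```python
# Minimal sketch of an AI System Registry entry per Pillar 1 (accountability):
# unique identifier, named steward, documentation link, and an audit trail.
# All field names, identifiers, and URLs are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RegistryEntry:
    system_id: str                  # unique identifier
    name: str
    steward: str                    # named individual responsible
    documentation_url: str
    audit_trail: list[dict] = field(default_factory=list)

    def log(self, actor: str, action: str) -> None:
        """Append a timestamped record of a decision or change."""
        self.audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
        })

entry = RegistryEntry("AI-0042", "Crisis-forecasting model", "J. Steward",
                      "https://example.invalid/docs/AI-0042")
entry.log("J. Steward", "Approved model version 1.3 for pilot use")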
Pillar 2: Policies
Codified standards and automated enforcement.
- Technical: Data access, model training, deployment standards
- Ethical: Fairness, bias mitigation, human rights
- Operational: Usage guidelines, incident reporting
- Vendor: Third-party AI evaluation, contract requirements
Pillar 3: Partnerships
Collaborative relationships enabling shared governance.
| Partner Type | Examples | Contribution |
|---|---|---|
| Hyperscalers | Microsoft, Google, AWS | Co-development of controls, early access |
| Academia | Research institutions | Independent validation, innovation |
| Member States | UN member countries | Joint testbeds, capacity building |
| Civil Society | NGOs, advocacy groups | Ethics review, accountability |
Platform Orchestrator Model
We no longer just set policy—we orchestrate a global governance platform.
- Convene diverse stakeholders
- Standardize frameworks and protocols
- Validate AI systems independently
- Provide tools, training, and resources
- Monitor governance effectiveness
- Evolve continuously
4A. Decision Framework - When to Apply Governance
Risk Assessment Dimensions
| Dimension | Range | Governance Impact |
|---|---|---|
| Impact Scope | Individual → Global | Wider scope = higher level |
| Decision Consequence | Informational → Automated | More automation = more oversight |
| Data Sensitivity | Public → Restricted | Higher sensitivity = stricter controls |
| Vulnerability Context | General → Conflict Zone | Vulnerable populations = max safeguards |
Five Governance Levels
10-Question Rapid Assessment
Score each question from 1 to 5 points (1 = lowest risk, 5 = highest; see the scoring sketch after this list):
- Number of people affected? (1-10 people = 1pt → 10K+ = 5pts)
- Data type? (Public = 1pt → Special category = 5pts)
- Automation level? (Info only = 1pt → Fully automated = 5pts)
- Reversibility? (Instant = 1pt → Irreversible = 5pts)
- Vulnerable populations? (None = 1pt → Life-sustaining/crisis = 5pts)
- Explainability? (Fully transparent = 1pt → Black box = 5pts)
- Bias potential? (None = 1pt → Extreme risk = 5pts)
- Cybersecurity risk? (Public/non-sensitive = 1pt → National security = 5pts)
- Operational criticality? (Nice to have = 1pt → Mission-critical = 5pts)
- Political sensitivity? (None = 1pt → Diplomatic implications = 5pts)
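A minimal scoring sketch is shown below. The mapping from total score to one of the five governance levels is an illustrative assumption; the cut-off bands are not prescribed in this playbook.

```python
# Minimal sketch: total the ten answers (each 1-5) and map the result to a
# suggested governance level. The five score bands below are illustrative
# assumptions, not prescribed thresholds.

QUESTIONS = [
    "people_affected", "data_type", "automation_level", "reversibility",
    "vulnerable_populations", "explainability", "bias_potential",
    "cybersecurity_risk", "operational_criticality", "political_sensitivity",
]

def governance_level(scores: dict[str, int]) -> tuple[int, int]:
    """Return (total score, suggested governance level 1-5)."""
    if set(scores) != set(QUESTIONS):
        raise ValueError("answer all ten questions")
    if not all(1 <= v <= 5 for v in scores.values()):
        raise ValueError("each answer must be scored 1-5")
    total = sum(scores.values())  # ranges from 10 to 50
    # Hypothetical bands: 10-17 -> L1, 18-25 -> L2, 26-33 -> L3, 34-41 -> L4, 42-50 -> L5
    level = min(5, 1 + (total - 10) // 8)
    return total, level

total, level = governance_level({q: 3 for q in QUESTIONS})
print(total, level)  # 30, Level 3 under these illustrative bands
```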
Pre-Deployment Checklist
- ☐ Risk assessment completed
- ☐ AI Steward assigned
- ☐ Required governance modules implemented
- ☐ Data sensitivity classified
- ☐ Model documentation completed
- ☐ Bias testing conducted
- ☐ Cybersecurity review passed
- ☐ User training prepared
- ☐ Incident response plan documented
- ☐ Monitoring configured
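Where the checklist is tracked in a system rather than on paper, it can be represented as data and used to gate deployment. The sketch below is a minimal illustration; the item keys mirror the checklist above and the function name is an assumption.

```python
# Minimal sketch: the pre-deployment checklist as data, with a simple gate
# that blocks deployment until every item is marked complete.

PRE_DEPLOYMENT_CHECKLIST = [
    "risk_assessment_completed",
    "ai_steward_assigned",
    "governance_modules_implemented",
    "data_sensitivity_classified",
    "model_documentation_completed",
    "bias_testing_conducted",
    "cybersecurity_review_passed",
    "user_training_prepared",
    "incident_response_plan_documented",
    "monitoring_configured",
]

def outstanding_items(status: dict[str, bool]) -> list[str]:
    """Return unfinished checklist items; an empty list means cleared to deploy."""
    return [item for item in PRE_DEPLOYMENT_CHECKLIST if not status.get(item, False)]

# Example: the first eight items done, the last two still open.
print(outstanding_items({item: True for item in PRE_DEPLOYMENT_CHECKLIST[:8]}))
```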
5. Governing Entity - OICT Policy, Strategy & Governance Division
About the Division
AI governance is led by the Policy, Strategy & Governance Division within the Office of Information and Communications Technology (OICT).
Global Footprint
| Location | Role | Focus Areas |
|---|---|---|
| New York, USA | Headquarters | Strategic oversight, policy development, partnership management |
| Bangkok, Thailand | Asia-Pacific Hub | Regional operations, multilingual AI development |
| Valencia, Spain | Global Service Centre | European operations, GDPR compliance, data center services |
| Brindisi, Italy | Global Service Centre | Field support operations, peacekeeping mission ICT, logistics support |
Our Data Landscape
Specialized Datasets
- Humanitarian Data Exchange (HDX) - Crisis and development data
- UNDRR Risk Data - Disaster risk and resilience information
- Geospatial Data - Satellite imagery from UN-SPIDER and UNOOSA
- Peacekeeping Mission Data - Operational field mission data
- Diplomatic Archives - Historical treaties, resolutions, negotiations
Service Lines
| Service | Description |
|---|---|
| Enterprise Solutions | M365 deployment (including Copilot), ERP, business intelligence |
| ICT Security | CSOC, identity management, DLP, incident response |
| Field Support | Peacekeeping ICT, humanitarian emergency tech, remote deployment |
| Business Relationship | Liaison with departments, requirements gathering, change management |
6. Governance Agenda - Strategic Questions
Key Strategic Questions
Question 1: Who governs what Copilot can learn?
Microsoft 365 Copilot now has access to petabytes of UN data. Without governance, we risk:
- Uncontrolled training on sensitive diplomatic communications
- Exposure of confidential information through AI-generated responses
- Loss of data sovereignty to commercial AI systems
Question 2: Can we innovate safely?
Our highest-value enterprise data is also our highest risk:
- Balancing AI innovation with security requirements
- Enabling productivity gains without compromising safety
- Managing vendor dependencies while maintaining control
Question 3: Will AI serve all UN languages equally?
The UN holds the world's largest multilingual diplomatic corpus:
- Commercial AI models often underperform in non-English languages
- Risk of perpetuating English-language dominance
- Need for specialized models trained on diplomatic language across all six official UN languages
Question 4: How is UN data represented in external models?
When commercial AI references "UN data," our credibility is at stake:
- Ensuring accurate representation of UN positions and data
- Preventing misattribution or misinterpretation
- Maintaining authority over official UN information
These are not hypothetical issues—they define the Secretariat's governance roadmap.
7. Challenges - Current Realities
Six Governance Roadblocks
1. Loss of Data Control
UN data used in commercial AI training without consent or oversight.
- Public UN datasets scraped for model training
- No attribution or accuracy verification
- Risk of perpetuating outdated or incorrect information
2. Vendor Dependence
Governance tied to vendor roadmaps and privacy policies.
- Limited control over feature deployment timing
- Dependency on vendor governance capabilities
- Risk of vendor lock-in affecting governance portability
3. Data Leakage Risk
Using third-party AI services may inadvertently train external models.
- Unclear data handling practices by AI providers
- Risk of sensitive information exposure through AI outputs
- Need for contractual safeguards and technical controls
4. Federated Ownership Constraints
Fragmented data ownership prevents full AI access and deployment.
- 18-25 entities with different priorities and systems
- With no single data owner, coordination mechanisms are required
- Varying technical capacity across entities
5. Language Equity Gaps
AI models underperform in non-English and low-resource contexts.
- Commercial models optimized for English
- Limited training data in other UN official languages
- Risk of perpetuating language-based inequities
6. Ethical Integration at Speed
Humanitarian missions offer high-impact settings for AI deployment, but they require governance frameworks that match their pace and scale.
- Rapid deployment needs vs. thorough governance
- Vulnerable population protection requirements
- Balancing innovation with ethical safeguards
Mitigation Approach: Sociotechnical solutions combining policy, risk assessment, ethics frameworks, and technical standards—recognizing the landscape evolves rapidly.
8. Countermeasures - How We're Responding
Legal & Policy Actions
- UN-specific AI terms with commercial partners
- Data residency controls and opt-out mechanisms for AI training
- UN AI data licensing framework and attribution requirements
- robots.txt directives, HTTP headers, and API rate enforcement for public datasets (see the sketch after this list)
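As an illustration of the crawler controls above, the sketch below generates robots.txt rules that opt a public dataset site out of common AI-training crawlers. GPTBot, Google-Extended, and CCBot are crawler and opt-out tokens published by their operators, but which crawlers a given UN site blocks remains a policy decision, so the list is illustrative.

```python
# Minimal sketch: generate robots.txt rules that disallow AI-training crawlers
# site-wide. The crawler list is illustrative; adjust per site policy.

AI_CRAWLERS = ["GPTBot", "Google-Extended", "CCBot"]

def ai_optout_robots_txt(crawlers: list[str] = AI_CRAWLERS) -> str:
    """Return robots.txt rules disallowing the listed crawlers for the whole site."""
    blocks = [f"User-agent: {ua}\nDisallow: /" for ua in crawlers]
    return "\n\n".join(blocks) + "\n"

print(ai_optout_robots_txt())
```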
Technical Safeguards
- Tiered AI access based on data sensitivity (see the access-policy sketch after this list)
- Secure AI sandboxes for high-risk work
- Audit trails and explainability standards
- AI-specific data access governance with automated controls
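A minimal sketch of tiered access is shown below, mapping data-sensitivity tiers to permitted classes of AI service. The tier names and service categories are illustrative assumptions, not the Secretariat's actual classification scheme.

```python
# Minimal sketch: map data-sensitivity tiers to permitted AI services,
# following the "tiered AI access based on data sensitivity" safeguard.
# Tier names and service categories are illustrative assumptions.

ACCESS_POLICY = {
    "public":       {"commercial_ai", "enterprise_ai", "secure_sandbox"},
    "internal":     {"enterprise_ai", "secure_sandbox"},
    "confidential": {"secure_sandbox"},
    "restricted":   set(),  # no AI processing without an explicit exception
}

def ai_access_allowed(sensitivity: str, service: str) -> bool:
    """Check whether a data-sensitivity tier permits the requested AI service."""
    return service in ACCESS_POLICY.get(sensitivity, set())

print(ai_access_allowed("internal", "commercial_ai"))       # False
print(ai_access_allowed("confidential", "secure_sandbox"))  # True
```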
Language & Equity Initiatives
- Multilingual AI training across all six official UN languages (Arabic, Chinese, English, French, Russian, Spanish)
- Language-equity requirements in vendor contracts
- Diplomatic domain-specific models trained on UN corpus
- Shared datasets with developing countries for capacity building
Procurement & Vendor Oversight
- Standard AI contractual clauses in every tech agreement
- Mandatory pre-deployment evaluation of AI features
- Contractual exit/portability rights for data and models
- Vendor performance monitoring against governance standards
Organizational Evolution
- Global expansion of governance partnerships
- Personnel relocation to operational hubs for real-time oversight
- Cross-functional teams spanning legal, technical, and policy domains
- Continuous learning from operational experience
Result: A resilient governance ecosystem—adaptive, accountable, and global.
9. Implementation Pathway - Governance Maturity Model
Five-Level Maturity Model
| Level | Focus Area | Outcome | Timeframe |
|---|---|---|---|
| Level 1 | Policy & Compliance | Baseline AI controls established | 0-6 months |
| Level 2 | Risk & Data Management | Standard AI safeguards operational | 6-12 months |
| Level 3 | Cross-Entity Governance | Shared frameworks & data standards | 12-18 months |
| Level 4 | Global Interoperability | Multi-jurisdictional AI scaling | 18-30 months |
| Level 5 | Ethical Intelligence | Continuous trust governance & global alignment | 30+ months |
Implementation Phases
Phase 1: Foundation (Months 1-6)
- Establish governance structure and leadership
- Deploy Layers 1-3 of governance stack
- Create AI system registry
- Develop initial policies and procedures
- Begin stakeholder engagement
Phase 2: Operationalization (Months 6-12)
- Add Layers 4-5 to governance stack
- Implement automated controls and monitoring
- Launch AI Steward network across entities
- Begin multilingual AI initiatives
- Establish vendor governance protocols
Phase 3: Scaling (Months 12-24)
- Complete full governance stack (Layer 6)
- Expand to all Secretariat entities
- Launch external partnerships and testbeds
- Publish transparency reports
- Begin capacity building for Member States
Phase 4: Leadership (Months 24+)
- Drive global governance standards
- Enable AI-powered multilateralism
- Share best practices internationally
- Continuous improvement and innovation
Success Metrics: percentage of AI systems under governance, incident response time, stakeholder trust scores, language-equity measures, and vendor compliance rate
10. Conclusion - Governance as the Infrastructure of Peace
Core Message
AI governance is not a barrier to innovation—it is the foundation that makes innovation sustainable.
Our Mission Continues
In the 20th century, the United Nations safeguarded peace through diplomacy.
In the 21st century, we safeguard peace through digital cooperation, algorithmic accountability, and data ethics.
We are not governing code.
We are governing power, knowledge, and trust.
Call to Action
"Every dataset, every model, and every decision can contribute to peace—if it is governed well."
This playbook invites all Secretariat entities and partners to join a coordinated effort: building a trustworthy, multilingual, globally interoperable AI future.
Next Steps
- Assess your current AI governance maturity using the framework in Section 9
- Identify governance gaps using the decision framework in Section 4A
- Engage with OICT to access governance tools and resources
- Join UN80 working groups to contribute to global AI governance
- Share your learnings to strengthen the collective governance ecosystem
Three Horizons
| Horizon | Timeframe | Focus | Goals |
|---|---|---|---|
| Horizon 1 | 2025-2026 | Stabilize | Establish baseline governance, secure current systems |
| Horizon 2 | 2026-2028 | Scale | Expand across Secretariat, deepen partnerships |
| Horizon 3 | 2028-2030 | Lead | Drive global standards, enable AI-powered multilateralism |
"AI governance is the infrastructure of peace."